
    Distributed representations accelerate evolution of adaptive behaviours

    Animals with rudimentary innate abilities require substantial learning to transform those abilities into useful skills, where a skill can be considered a set of sensory-motor associations. Using linear neural network models, it is proved that if skills are stored as distributed representations, then within-lifetime learning of part of a skill can induce automatic learning of the remaining parts of that skill. More importantly, it is shown that this "free-lunch" learning (FLL) is responsible for accelerated evolution of skills, when compared with networks which either 1) cannot benefit from FLL or 2) cannot learn. Specifically, it is shown that FLL accelerates the appearance of adaptive behaviour, both in its innate form and as FLL-induced behaviour, and that FLL can accelerate the rate at which learned behaviours become innate.
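    The free-lunch effect is easy to demonstrate in a minimal linear-network sketch (an illustrative reconstruction, not the paper's actual model or parameters): associations are stored distributedly in a single weight matrix, the matrix is perturbed, and retraining on only half of the associations also reduces error on the untrained half, because gradient descent confines weight changes to the subspace spanned by the trained inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 20, 20, 10                  # input dim, output dim, no. of retrained patterns
W_true = rng.standard_normal((m, n))  # the full "skill": a set of linear associations
W = W_true + 0.5 * rng.standard_normal((m, n))  # perturbed (partially forgotten) network
X = rng.standard_normal((n, 2 * k))
X_tr, X_te = X[:, :k], X[:, k:]       # relearn half the patterns, hold out the rest
Y_tr = W_true @ X_tr

# Gradient descent on the trained pairs converges to the nearest exact fit,
# which we can write in closed form with the pseudo-inverse:
W_new = W + (Y_tr - W @ X_tr) @ np.linalg.pinv(X_tr)

def err(A, Xs):
    return np.linalg.norm(A @ Xs - W_true @ Xs)

print(err(W, X_te), err(W_new, X_te))  # held-out error drops: the "free lunch"
```

    Relearning the trained half leaves its error at zero, while the held-out half improves because the perturbation's component within the trained subspace is removed.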

    Topological inference for EEG and MEG

    Neuroimaging produces data that are continuous in one or more dimensions. This calls for an inference framework that can handle data that approximate functions of space, for example, anatomical images, time-frequency maps and distributed source reconstructions of electromagnetic recordings over time. Statistical parametric mapping (SPM) is the standard framework for whole-brain inference in neuroimaging: SPM uses random field theory to furnish p-values that are adjusted to control family-wise error or false discovery rates, when making topological inferences over large volumes of space. Random field theory regards data as realizations of a continuous process in one or more dimensions. This contrasts with classical approaches like the Bonferroni correction, which consider images as collections of discrete samples with no continuity properties (i.e., the probabilistic behavior at one point in the image does not depend on other points). Here, we illustrate how random field theory can be applied to data that vary as a function of time, space or frequency. We emphasize how topological inference of this sort is invariant to the geometry of the manifolds on which data are sampled. This is particularly useful in electromagnetic studies that often deal with very smooth data on scalp or cortical meshes. This application illustrates the versatility and simplicity of random field theory and the seminal contributions of Keith Worsley (1951-2009), a key architect of topological inference. Published in the Annals of Applied Statistics (http://dx.doi.org/10.1214/10-AOAS337) by the Institute of Mathematical Statistics (http://www.imstat.org/aoas/).
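    Why Bonferroni is conservative for smooth data, which is what motivates random field theory, can be seen in a toy simulation (parameters are assumptions; this uses only numpy/scipy and none of SPM's actual RFT machinery):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.stats import norm

rng = np.random.default_rng(1)
n_points, sigma, n_sims, alpha = 1000, 20.0, 2000, 0.05

maxima = np.empty(n_sims)
for i in range(n_sims):
    z = gaussian_filter1d(rng.standard_normal(n_points), sigma)  # smooth 1-D "image"
    maxima[i] = (z / z.std()).max()          # peak of the unit-variance field

bonferroni = norm.ppf(1 - alpha / n_points)  # treats all 1000 samples as independent
empirical = np.quantile(maxima, 1 - alpha)   # threshold that actually controls FWER
print(bonferroni, empirical)                 # Bonferroni is markedly higher (conservative)
```

    The smoother the field, the fewer effectively independent observations it contains, so the Bonferroni threshold (which ignores the continuity) overshoots the threshold that controls the family-wise error of the field's maximum.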

    Transient Resetting: A Novel Mechanism for Synchrony and Its Biological Examples

    The study of synchronization in biological systems is essential for the understanding of the rhythmic phenomena of living organisms at both molecular and cellular levels. In this paper, by using simple dynamical systems theory, we present a novel mechanism, named transient resetting, for the synchronization of uncoupled biological oscillators with stimuli. This mechanism not only can unify and extend many existing results on (deterministic and stochastic) stimulus-induced synchrony, but also may actually play an important role in biological rhythms. We argue that transient resetting is a possible mechanism for the synchronization in many biological organisms, which might also be further used in medical therapy of rhythmic disorders. Examples on the synchronization of neural and circadian oscillators are presented to verify our hypothesis. (17 pages, 7 figures.)
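    A caricature of stimulus-induced synchrony (a hard phase reset standing in for the paper's transient-resetting mechanism; all parameters are assumptions): uncoupled oscillators with heterogeneous frequencies drift apart, and a single common stimulus that resets their phases produces transient synchrony, measured by the Kuramoto order parameter.

```python
import numpy as np

rng = np.random.default_rng(2)
n, dt, steps = 50, 0.01, 2000
omega = 2 * np.pi * (1 + 0.02 * rng.standard_normal(n))  # heterogeneous frequencies
theta = rng.uniform(0, 2 * np.pi, n)                     # incoherent initial phases

def order(th):
    """Kuramoto order parameter: 1 = perfect synchrony, ~0 = incoherence."""
    return abs(np.exp(1j * th).mean())

r_before = order(theta)
for t in range(steps):
    theta += omega * dt          # uncoupled drift: no interaction between oscillators
    if t == 1800:
        theta[:] = 0.0           # common stimulus resets every oscillator's phase

r_after = order(theta)           # still high shortly after the reset
print(r_before, r_after)
```

    Because the oscillators are uncoupled, the synchrony is transient: the frequency spread disperses the phases again, which is why repeated or appropriately timed stimuli matter in the paper's account.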

    An active inference account of protective behaviours during the COVID-19 pandemic

    Newly emerging infectious diseases, such as the coronavirus (COVID-19), create new challenges for public healthcare systems. Before effective treatments become available, countering the spread of these infections depends on mitigating, protective behaviours such as social distancing, respecting lockdown, wearing masks, frequent handwashing, travel restrictions, and vaccine acceptance. Previous work has shown that enacting protective behaviours depends on beliefs about individual vulnerability, threat severity, and one's ability to engage in such protective actions. However, little is known about the genesis of these beliefs in response to an infectious disease epidemic, and the cognitive mechanisms that may link these beliefs to decision making. Active inference (AI) is a recent approach to behavioural modelling that integrates embodied perception, action, belief updating, and decision making. This approach provides a framework for understanding the behaviour of agents in situations that require planning under uncertainty. It assumes that the brain infers the hidden states that cause sensations, predicts the perceptual feedback produced by adaptive actions, and chooses actions that minimize expected surprise in the future. In this paper, we present a computational account describing how individuals update their beliefs about the risks and thereby commit to protective behaviours. We show how perceived risks, beliefs about future states, sensory uncertainty, and outcomes under each policy can determine individual protective behaviours. We suggest that these mechanisms are crucial for assessing how individuals cope with uncertainty during a pandemic, and we highlight the relevance of these new perspectives for public health policies.
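    A toy version of the belief-updating step (a minimal sketch, far simpler than the paper's full active-inference model; every number here is an assumption): an observation updates beliefs about local prevalence by Bayes' rule, and each policy is scored by the expected surprise of the undesired outcome (infection).

```python
import numpy as np

# Hidden state: local prevalence (low, high); observation: daily case report (low, high)
prior = np.array([0.7, 0.3])               # P(prevalence)
likelihood = np.array([[0.8, 0.2],         # P(obs | state); rows: obs, cols: state
                       [0.2, 0.8]])

obs = 1                                    # a "high cases" report arrives
posterior = likelihood[obs] * prior
posterior /= posterior.sum()               # Bayes' rule: belief shifts towards "high"

# Policies: go out vs stay home; infection is the surprising (non-preferred) outcome
p_infect = np.array([[0.01, 0.20],         # P(infection | state) when going out
                     [0.001, 0.01]])       # ...when staying home
risk = p_infect @ posterior                # expected infection probability per policy
G = -np.log(1 - risk)                      # surprise of infection, crudely
policy = np.exp(-8 * G)                    # softmax over (negative) expected surprise
policy /= policy.sum()
print(posterior, policy)                   # staying home becomes the favoured policy
```

    Raising sensory uncertainty (a flatter likelihood) weakens the belief update and hence the behavioural shift, which is one of the effects the paper's fuller model explores.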

    Nonlinear spatial normalization using basis functions


    A Phenomenological Theory of Spatially Structured Local Synaptic Connectivity

    The structure of local synaptic circuits is the key to understanding cortical function and how neuronal functional modules such as cortical columns are formed. The central problem in deciphering cortical microcircuits is the quantification of synaptic connectivity between neuron pairs. I present a theoretical model that accounts for the axon and dendrite morphologies of pre- and postsynaptic cells and provides the average number of synaptic contacts formed between them as a function of their relative locations in three-dimensional space. An important aspect of the current approach is the representation of the complex structure of an axonal/dendritic arbor as a superposition of basic structures—synaptic clouds. Each cloud has three structural parameters that can be directly estimated from two-dimensional drawings of the underlying arbor. Using empirical data available in the literature, I applied this theory to three morphologically different types of cell pairs. I found that, within a wide range of cell separations, the theory is in very good agreement with empirical data on (i) axonal–dendritic contacts of pyramidal cells and (ii) somatic synapses formed by the axons of inhibitory interneurons. Since plane arborization drawings are available in the literature for many types of neurons, this theory can provide a practical means for quantitatively deriving local synaptic circuits based on the actual observed densities of specific types of neurons and their morphologies. It can also have significant implications for computational models of cortical networks by making it possible to wire up simulated neural networks in a realistic fashion.
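    In the simplest special case of this picture — a single isotropic Gaussian "cloud" per arbor, rather than the paper's three-parameter clouds and superpositions — the expected contact count is just the overlap integral of two Gaussian densities and falls off as a Gaussian of the somatic separation (illustrative only; the widths and scale constant below are assumptions):

```python
import numpy as np

def expected_contacts(d, sigma_axon, sigma_dend, c=1.0):
    """Overlap of two isotropic 3-D Gaussian clouds whose centres are d apart."""
    s2 = sigma_axon**2 + sigma_dend**2
    return c * (2 * np.pi * s2) ** -1.5 * np.exp(-d**2 / (2 * s2))

for d in (0.0, 100.0, 200.0):  # separations in micrometres (assumed arbor widths)
    print(d, expected_contacts(d, sigma_axon=150.0, sigma_dend=80.0))
```

    The overlap of two Gaussians is itself Gaussian in the separation, with combined width sigma_axon^2 + sigma_dend^2, so broader arbors give connectivity profiles that decay more slowly with distance.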

    Attention, Uncertainty, and Free-Energy

    We recently suggested that attention can be understood as inferring the level of uncertainty, or precision, during hierarchical perception. In this paper, we try to substantiate this claim using neuronal simulations of directed spatial attention and biased competition. These simulations assume that neuronal activity encodes a probabilistic representation of the world that optimizes free-energy in a Bayesian fashion. Because free-energy bounds surprise or the (negative) log-evidence for internal models of the world, this optimization can be regarded as evidence accumulation or (generalized) predictive coding. Crucially, both predictions about the state of the world generating sensory data and the precision of those data have to be optimized. Here, we show that if the precision depends on the states, one can explain many aspects of attention. We illustrate this in the context of the Posner paradigm, using the simulations to generate both psychophysical and electrophysiological responses. These simulated responses are consistent with attentional bias or gating, competition for attentional resources, attentional capture and associated speed-accuracy trade-offs. Furthermore, if we present both attended and non-attended stimuli simultaneously, biased competition for neuronal representation emerges as a principled and straightforward property of Bayes-optimal perception.
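    The role of precision can be reduced to a one-line Gaussian example (a standard precision-weighted average, not the paper's generalized predictive-coding scheme; the numbers are assumptions): boosting sensory precision — the proposed computational correlate of attention — pulls the posterior estimate towards the attended data.

```python
def posterior_mean(y, mu_prior, pi_sensory, pi_prior):
    """Bayes-optimal fusion of a Gaussian prior and a Gaussian observation,
    each weighted by its precision (inverse variance)."""
    return (pi_sensory * y + pi_prior * mu_prior) / (pi_sensory + pi_prior)

y, mu0 = 2.0, 0.0                                        # datum and prior mean
unattended = posterior_mean(y, mu0, pi_sensory=1.0, pi_prior=4.0)
attended = posterior_mean(y, mu0, pi_sensory=8.0, pi_prior=4.0)  # attention raises precision
print(unattended, attended)   # the estimate moves from 0.4 towards the datum (~1.33)
```

    With low sensory precision the prior dominates; raising it re-weights the same prediction error, which is how precision-dependent gain can produce attentional gating without any change in the data.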